230 research outputs found

    Learning with Augmented Features for Heterogeneous Domain Adaptation

    We propose a new learning method for heterogeneous domain adaptation (HDA), in which the data from the source and target domains are represented by heterogeneous features of different dimensions. Using two different projection matrices, we first transform the data from the two domains into a common subspace so that the similarity between samples across domains can be measured. We then propose two new feature mapping functions that augment the transformed data with their original features and zeros. Existing learning methods (e.g., SVM and SVR) can readily incorporate our augmented feature representations to effectively utilize the data from both domains for HDA. Using the hinge loss in SVM as an example, we present the detailed objective function of our method, called Heterogeneous Feature Augmentation (HFA), for the linear case, and also describe its kernelization to efficiently cope with very high-dimensional data. Moreover, we develop an alternating optimization algorithm to effectively solve the nontrivial optimization problem in HFA. Comprehensive experiments on two benchmark datasets clearly demonstrate that HFA outperforms existing HDA methods. Comment: ICML2012
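    The feature-augmentation construction described above is easy to state concretely. The sketch below is a minimal illustration, not the paper's implementation: the function names are ours, and P and Q are random stand-ins for the projection matrices that HFA actually learns. It shows how source and target samples of different dimensions become comparable vectors in a single augmented space.

```python
import numpy as np

def augment_source(x_s, P, d_t):
    # [P x_s ; x_s ; 0_{d_t}]: projection, original features, zero padding
    return np.concatenate([P @ x_s, x_s, np.zeros(d_t)])

def augment_target(x_t, Q, d_s):
    # [Q x_t ; 0_{d_s} ; x_t]: projection, zero padding, original features
    return np.concatenate([Q @ x_t, np.zeros(d_s), x_t])

# Toy usage: heterogeneous dimensions d_s != d_t, common subspace of size d_c.
rng = np.random.default_rng(0)
d_s, d_t, d_c = 50, 30, 20
P = rng.standard_normal((d_c, d_s))  # learned by HFA; random here
Q = rng.standard_normal((d_c, d_t))
x_s, x_t = rng.standard_normal(d_s), rng.standard_normal(d_t)
# Both augmented vectors live in R^{d_c + d_s + d_t}, so a standard SVM applies.
assert augment_source(x_s, P, d_t).shape == augment_target(x_t, Q, d_s).shape
```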

    MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation

    How to learn effectively from unlabeled target-domain data is crucial for domain adaptation, as it helps reduce the large performance gap caused by domain shift or distribution change. In this paper, we propose an easy-to-implement method, dubbed MiniMax Entropy Networks (MMEN), based on adversarial learning. Unlike most existing approaches, which employ a generator to deal with the domain difference, MMEN focuses on learning categorical information from unlabeled target samples with the help of labeled source samples. Specifically, we introduce an unfair multi-class classifier, named the categorical discriminator, which classifies source samples accurately but is confused about the categories of target samples. The generator learns a common subspace that aligns the unlabeled samples based on their target pseudo-labels. We also provide a theoretical explanation of MMEN, showing that learning this feature alignment reduces domain mismatch at the category level. Experimental results on various benchmark datasets demonstrate the effectiveness of our method over existing state-of-the-art baselines. Comment: 8 pages, 6 figures
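    A minimal sketch of the minimax-entropy idea described above, assuming a standard cross-entropy/entropy formulation: the loss composition, the weight `lam`, and the function names are illustrative guesses, not the paper's exact objective.

```python
import torch.nn.functional as F

def mean_entropy(logits):
    # Mean Shannon entropy of the softmax predictions in a batch.
    log_p = F.log_softmax(logits, dim=1)
    return -(log_p.exp() * log_p).sum(dim=1).mean()

def minimax_entropy_losses(logits_src, y_src, logits_tgt, lam=0.1):
    ce_src = F.cross_entropy(logits_src, y_src)
    h_tgt = mean_entropy(logits_tgt)
    # Categorical discriminator: accurate on source, maximally uncertain
    # ("confused") on target -> minimize source CE, maximize target entropy.
    loss_discriminator = ce_src - lam * h_tgt
    # Generator: produce features on which the classifier becomes confident
    # on target, aligning samples with their pseudo-categories
    # -> minimize target entropy.
    loss_generator = lam * h_tgt
    return loss_discriminator, loss_generator
```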

    Electronic structure of self-assembled InAs/InP quantum dots: a comparison with self-assembled InAs/GaAs quantum dots

    We investigate the electronic structure of InAs/InP quantum dots using an atomistic pseudopotential method and compare it to that of InAs/GaAs QDs. We show that even though the InAs/InP and InAs/GaAs dots share the same dot material, their electronic structures differ significantly in certain aspects, especially for holes: (i) the hole levels have much larger energy spacings in the InAs/InP dots than in InAs/GaAs dots of corresponding size; (ii) in contrast with the InAs/GaAs dots, where the sizeable hole p, d intra-shell level splitting smears out the energy-level shell structure, the InAs/InP QDs have a well-defined energy-level shell structure with small p, d level splitting for holes; (iii) the fundamental exciton energies of the InAs/InP dots are calculated to be around 0.8 eV (~1.55 μm), about 200 meV lower than those of typical InAs/GaAs QDs, mainly due to the smaller lattice mismatch in the InAs/InP dots; (iv) the widths of the exciton P shell and D shell are much narrower in the InAs/InP dots than in the InAs/GaAs dots; (v) the InAs/GaAs and InAs/InP dots have a reversed light-polarization anisotropy along the [100] and [1-10] directions.
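    The quoted correspondence between the ~0.8 eV exciton energy and the 1.55 μm telecom window follows directly from the photon energy-wavelength relation; as a quick check (standard physics, not from the paper):

```latex
\lambda = \frac{hc}{E} \approx \frac{1240~\mathrm{eV\,nm}}{E}
\quad\Longrightarrow\quad
\lambda(0.8~\mathrm{eV}) \approx \frac{1240}{0.8}~\mathrm{nm}
= 1550~\mathrm{nm} \approx 1.55~\mu\mathrm{m}
```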

    A core stateless bandwidth broker architecture for scalable support of guaranteed services


    Learning Motion Refinement for Unsupervised Face Animation

    Unsupervised face animation aims to generate a human face video that combines the appearance of a source image with the motion of a driving video. Existing methods typically adopt a prior-based motion model (e.g., the local affine or local thin-plate-spline motion model). While such models can capture coarse facial motion, artifacts often appear around small motions in local areas (e.g., the lips and eyes), because these methods have limited ability to model finer facial motions. In this work, we design a new unsupervised face animation approach that learns coarse and fine motions simultaneously. In particular, while exploiting the local affine motion model to learn the global coarse facial motion, we design a novel motion refinement module that compensates for the local affine model by capturing finer facial motions in local areas. The motion refinement is learned from the dense correlation between the source and driving images. Specifically, we first construct a structure correlation volume based on the keypoint features of the source and driving images. We then train a model to generate the fine facial motions iteratively from low to high resolution. The learned motion refinements are combined with the coarse motion to generate the output image. Extensive experiments on widely used benchmarks demonstrate that our method achieves the best results among state-of-the-art baselines. Comment: NeurIPS 2023
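    The structure correlation volume mentioned above is, at its core, an all-pairs feature correlation; the sketch below is a generic illustration of that idea, with shapes and names of our choosing rather than the paper's actual module. It computes one correlation map over the driving image for each source location.

```python
import torch

def correlation_volume(feat_src, feat_drv):
    # All-pairs correlation between source and driving feature maps:
    # (B, C, H, W) x (B, C, H, W) -> (B, H*W, H, W).
    B, C, H, W = feat_src.shape
    src = feat_src.flatten(2)  # (B, C, H*W)
    drv = feat_drv.flatten(2)  # (B, C, H*W)
    # Dot product over channels, scaled for numerical stability.
    corr = torch.einsum('bcm,bcn->bmn', src, drv) / C**0.5  # (B, HW, HW)
    return corr.view(B, H * W, H, W)  # one map per source location

# Toy usage with random tensors standing in for keypoint features.
f_s = torch.randn(2, 64, 16, 16)
f_d = torch.randn(2, 64, 16, 16)
vol = correlation_volume(f_s, f_d)
print(vol.shape)  # torch.Size([2, 256, 16, 16])
```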